Martin Pluisch

Hi, I'm Martin Pluisch 👋

Ph.D. Student in Computer Science
TU Darmstadt

Research Interests





My research focuses on creating more intelligent, contextually aware augmented reality systems that adapt to the complex interplay between users’ cognitive states and their ever-changing physical environments. Ultimately, I envision augmented reality not as a passive overlay of digital content, but as an anticipatory, perceptually aligned medium that seamlessly adjusts to real-world complexity.



Publications in 2025



  • IEEE ISMAR, Daejeon, South Korea, October 08–12, 2025
    M. Pluisch (TU Darmstadt), J. Gugenheimer (TU Darmstadt), Y. Cho (UCL London), S. Julier (UCL London), E. Kruijff (BRSU)
    Augmented reality displays are becoming more powerful and, at the same time, more mobile. Although mobile AR is gaining popularity, it remains difficult to gain insight into users' cognitive load, even though this is relevant for many mobile tasks. Cognitive load is usually measured via subjective, task-disruptive self-report measures (questionnaires) such as the NASA-TLX. Although biosensing methods (e.g., GSR, pulse, HRV) have previously been deployed to obtain more objective measures of cognitive load, they are highly susceptible to noise caused by user movement. More reliable methods such as EEG exist, but they are not suitable for real-world scenarios and mobile setups.

    In this paper, we report on a non-contact, multi-sensor approach to cognitive load assessment in mobile AR. Our approach combines pupillometry, facial feature tracking based on the Facial Action Coding System, and thermal imaging for respiratory rate analysis. In our study, we analysed the suitability of these methods by comparing load assessment for low and high cognitive load tasks while participants were static (seated, baseline) and while they were moving.

    Using XGBoost, our model achieved 86.11% accuracy for binary cognitive load assessment (low vs. high) and 84.24% accuracy for four-way classification (cognitive load × mobility). Feature importance analysis revealed that robust predictors of cognitive load in mobile AR included gaze dynamics (e.g., fixation, pursuit, and saccade durations), pupil diameter metrics (such as FFT band power and variability measures), and facial and respiratory features (including brow lowering and nostril temperature quantiles). A hedged code sketch of this classification step appears after the publications list.
  • ACM SUI, Montreal, Canada, November 10–11, 2025
    M. Pluisch (TU Darmstadt), J. Gugenheimer (TU Darmstadt), E. Kruijff (BRSU)
    Augmented reality displays are evolving into devices that can seamlessly integrate virtual content into users' physical environments in a context-sensitive manner, promising to support users more fluidly in their tasks. However, dynamically adapting applications to the user's environment presents challenges that require a real-time understanding of the user's current context. While recent computer vision approaches for context detection have made progress, they often lack a semantic understanding of the interplay between the user, their surroundings, and the system state.

    In this paper, we investigate how context-aware application recommendations affect user experience and task performance in AR. To this end, we evaluate a real-time system that uses a large multimodal model integrated with a Microsoft HoloLens 2. The system processes application metadata and front-facing camera images to recommend contextually relevant apps. In a user study, we compared three interaction modes for selecting apps: a fully automatic mode that proactively switches to the suggested app, a recommendation-based mode that highlights a suggested app without launching it, and a manual mode requiring user selection. A hedged sketch of the recommendation step appears after the publications list.

    Our study showed that automatic context-aware app switching significantly improved task efficiency and reduced cognitive load compared to the other modes, without diminishing users' sense of control. Participants preferred the automated mode, which enabled smoother workflows and enhanced usability.
  • ACM SUI, Montreal, Canada, November 10–11, 2025
    M. Pluisch (TU Darmstadt), S. Bateman (UNB), A. Hinkenjann (BRSU), E. Kruijff (BRSU)
    While augmented reality head-mounted displays (AR-HMDs) have advanced in recent years, they still suffer from technical limitations that can affect interaction. One important limiting factor is the narrow field of view (FOV) of AR-HMDs, which can result in AR content being positioned off-screen.

    To address this challenge, researchers have explored providing guidance to off-screen objects. However, because the cameras on many AR-HMDs can track users’ hands in a range that extends well beyond the display FOV, they afford direct interaction with off-screen objects, which could provide improved performance over simple off-screen guidance.

    In this paper, we present three novel methods for enabling facilitated interaction with objects outside the FOV when using AR-HMDs. In our first user study, we evaluated our methods in terms of interaction accuracy, speed, and overall preference to assess their feasibility. In a second study, we then compared the best-performing method from the initial study to two baseline approaches.

    Our results indicate that our methods can successfully aid users in off-screen interactions, partially compensating for the drawbacks inherently associated with a narrow display FOV in augmented reality environments.
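For the ISMAR paper above, here is a minimal sketch of how an XGBoost classifier could be trained on windowed multi-sensor features and inspected for feature importance. The column names, data, and hyperparameters are illustrative assumptions, not the published implementation.

```python
# Hedged sketch: binary cognitive-load classification with XGBoost on a
# tabular feature set. Column names are hypothetical placeholders for the
# gaze, pupil, facial, and respiratory features described in the paper.
import numpy as np
import pandas as pd
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 400  # hypothetical number of analysis windows

# Placeholder per-window features; real values would come from the sensors.
features = pd.DataFrame({
    "fixation_duration": rng.random(n),
    "saccade_duration": rng.random(n),
    "pupil_fft_band_power": rng.random(n),
    "pupil_diameter_std": rng.random(n),
    "brow_lowering_au4": rng.random(n),
    "nostril_temp_q75": rng.random(n),
})
labels = rng.integers(0, 2, size=n)  # 0 = low load, 1 = high load

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.25, random_state=42, stratify=labels
)

model = XGBClassifier(
    n_estimators=200, max_depth=4, learning_rate=0.1, eval_metric="logloss"
)
model.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))

# Rank features to see which predictors drive the classification.
for name, score in sorted(
    zip(features.columns, model.feature_importances_),
    key=lambda kv: kv[1], reverse=True,
):
    print(f"{name}: {score:.3f}")
```

On random data like this the accuracy is near chance; the point is only the pipeline shape: tabular multi-sensor features in, a binary load label out, and feature importances for interpretation.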
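For the first SUI paper, here is a hedged sketch of the recommendation step only: one front-facing camera frame and the registered app metadata are sent to a multimodal chat model, which names the most relevant app. The OpenAI client, the gpt-4o-mini model, the recommend_app helper, and the example apps are assumptions; the paper does not disclose which backend the system uses, and the real system additionally handles the HoloLens 2 integration and the three interaction modes.

```python
# Hedged sketch: ask a multimodal chat model to pick the most contextually
# relevant app given a first-person camera frame and app metadata.
import base64
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical application metadata registered with the AR system.
apps = [
    {"name": "RecipeAssistant", "description": "Step-by-step cooking guidance"},
    {"name": "ToolInspector", "description": "Identifies and explains workshop tools"},
    {"name": "NoteBoard", "description": "Spatial sticky notes for desk work"},
]

def recommend_app(frame_path: str) -> str:
    """Return the name of the app the model considers most relevant."""
    with open(frame_path, "rb") as f:
        frame_b64 = base64.b64encode(f.read()).decode("utf-8")

    prompt = (
        "Given this first-person camera frame and the app list below, "
        "reply with only the name of the most contextually relevant app.\n"
        + json.dumps(apps)
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption; any vision-capable chat model works
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{frame_b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content.strip()

# Example: print(recommend_app("hololens_frame.jpg"))
```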


Education





Previous Work





GitHub Activity




 

